L1-norm Penalised Orthogonal Forward Regression

Authors

  • Xia Hong
  • Sheng Chen
  • Yi Guo
  • Junbin Gao
Abstract

An l1-norm penalized orthogonal forward regression (l1-POFR) algorithm is proposed based on the concept of the leave-one-out mean square error (LOOMSE). Firstly, a new l1-norm penalized cost function is defined in the constructed orthogonal space, and each orthogonal basis is associated with an individually tunable regularization parameter. Secondly, due to the orthogonal computation, the LOOMSE can be computed analytically without actually splitting the data set; moreover, a closed form of the optimal regularization parameter in terms of minimal LOOMSE is derived. Thirdly, a lower bound for the regularization parameters is proposed, which can be used for robust LOOMSE estimation by adaptively detecting and moving regressors to an inactive set, so that the computational cost of the algorithm is significantly reduced. Illustrative examples are included to demonstrate the effectiveness of this new l1-POFR approach.
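The analytic LOOMSE evaluation described above can be illustrated with a small sketch. Assuming a design matrix whose columns have already been orthogonalised and a per-basis regularisation parameter for each column, the leave-one-out residuals follow from the standard PRESS identity e_i/(1 - h_ii) without refitting; the function name and setup below are illustrative, not taken from the paper.

```python
import numpy as np

def loomse_orthogonal(W, y, lams):
    """Analytic leave-one-out MSE for a ridge-style fit in an orthogonal basis.

    W    : (n, m) matrix with mutually orthogonal columns (illustrative setup)
    y    : (n,) target vector
    lams : (m,) individually tunable regularisation parameters
    """
    norms = (W ** 2).sum(axis=0)                 # w_k^T w_k for each basis
    g = (W.T @ y) / (norms + lams)               # regularised coefficients
    resid = y - W @ g                            # training residuals
    # diagonal of the hat matrix: h_ii = sum_k W_ik^2 / (w_k^T w_k + lam_k)
    h = (W ** 2 / (norms + lams)).sum(axis=1)
    loo_resid = resid / (1.0 - h)                # PRESS residuals e_i/(1-h_ii)
    return np.mean(loo_resid ** 2)
```

Because the columns are orthogonal, the hat-matrix diagonal is a simple per-row sum, which is what makes the LOOMSE cheap to evaluate for each candidate regularisation parameter.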


Similar articles

l1-norm penalised orthogonal forward regression

Xia Hong, Sheng Chen, Yi Guo and Junbin Gao. Department of Computer Science, School of Mathematical, Physical and Computational Sciences, University of Reading, Reading, UK; Electronics and Computer Science, University of Southampton, Southampton, UK; Department of Electrical and Computer Engineering, Faculty of Engineering, King Abdulaziz University, Jeddah, Saudi Arabia; CSIRO Mathematics and...


Fitting Two Concentric Circles and Spheres to Data by l1 Orthogonal Distance Regression

The problem of fitting two concentric circles and spheres to data arises in computational metrology. The most commonly used criterion for this is the least-squares norm. There is also interest in other criteria, and here we focus on the use of the l1 norm, which is traditionally regarded as important when the data contain wild points. A common approach to this problem involves an iteration proces...
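The robustness argument for the l1 criterion can be sketched for a single circle: for a fixed centre, the l1-optimal radius is the *median* of the point-to-centre distances (the least-squares criterion would use the mean), and the centre can be refined by subgradient steps. This is a simplified illustration under assumed names, not the iterative method of the cited paper.

```python
import numpy as np

def fit_circle_l1(P, n_iter=200, lr=0.05, tol=1e-9):
    """Sketch of an l1 orthogonal-distance circle fit: minimise
    sum_i |r_i(c) - R| over centre c and radius R, where r_i is the
    distance from point i to the centre c."""
    c = P.mean(axis=0)                       # initial centre guess
    for k in range(n_iter):
        diff = P - c
        r = np.linalg.norm(diff, axis=1)
        R = np.median(r)                     # l1-optimal radius for this centre
        # subgradient w.r.t. c; points already on the circle (|r-R| <= tol)
        # contribute 0, a valid element of the subdifferential at the kink
        s = np.where(np.abs(r - R) > tol, np.sign(R - r), 0.0)
        g = (s[:, None] * diff / np.maximum(r, 1e-12)[:, None]).sum(axis=0)
        c = c - (lr / np.sqrt(k + 1)) * g    # diminishing step size
    r = np.linalg.norm(P - c, axis=1)
    return c, np.median(r)
```

The median radius is what gives the l1 fit its insensitivity to wild points: a few grossly wrong distances shift the mean but leave the median essentially unchanged.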


Covariance selection and estimation via penalised normal likelihood

We propose a nonparametric method to identify parsimony and to produce a statistically efficient estimator of a large covariance matrix. We reparameterise a covariance matrix through the modified Cholesky decomposition of its inverse or the one-step-ahead predictive representation of the vector of responses and reduce the nonintuitive task of modelling covariance matrices to the familiar task o...
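The modified Cholesky reparameterisation mentioned above reduces covariance estimation to a sequence of ordinary regressions: row t of a unit lower-triangular matrix T holds the negated coefficients of regressing variable t on variables 1..t-1, and D collects the residual variances, so that Sigma^{-1} = T' D^{-1} T. A minimal unregularised sketch (illustrative function name, not the paper's estimator):

```python
import numpy as np

def modified_cholesky_cov(Y):
    """Covariance estimate via the modified Cholesky decomposition of
    its inverse: Sigma^{-1} = T' D^{-1} T, built from one-step-ahead
    regressions of each variable on its predecessors."""
    n, p = Y.shape
    Yc = Y - Y.mean(axis=0)                      # centre the data
    T = np.eye(p)
    d = np.empty(p)
    d[0] = Yc[:, 0].var()                        # variance of the first variable
    for t in range(1, p):
        Z, target = Yc[:, :t], Yc[:, t]
        phi = np.linalg.lstsq(Z, target, rcond=None)[0]
        T[t, :t] = -phi                          # negated regression coefficients
        d[t] = np.mean((target - Z @ phi) ** 2)  # innovation variance
    Tinv = np.linalg.inv(T)
    return Tinv @ np.diag(d) @ Tinv.T            # Sigma = T^{-1} D T^{-T}
```

With unregularised regressions this exactly reproduces the sample covariance; the point of the reparameterisation is that sparsity or shrinkage can then be imposed on the familiar regression coefficients instead of on the covariance matrix directly.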


Least Squares Optimization with L1-Norm Regularization

This project surveys and examines optimization approaches proposed for parameter estimation in Least Squares linear regression models with an L1 penalty on the regression coefficients. We first review linear regression and regularization, and both motivate and formalize this problem. We then give a detailed analysis of 8 of the varied approaches that have been proposed for optimizing this objec...
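One of the simplest approaches in this family is proximal gradient descent (ISTA): a gradient step on the least-squares term followed by soft thresholding, which is the proximal operator of the L1 penalty. A minimal sketch, not any specific method from the survey:

```python
import numpy as np

def soft_threshold(v, t):
    """Proximal operator of t*||.||_1: shrink each entry toward zero by t."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def ista(X, y, lam, step=None, n_iter=1000):
    """ISTA for min_b 0.5*||y - X b||^2 + lam*||b||_1."""
    n, p = X.shape
    if step is None:
        # 1/L, where L = sigma_max(X)^2 is the gradient's Lipschitz constant
        step = 1.0 / np.linalg.norm(X, 2) ** 2
    b = np.zeros(p)
    for _ in range(n_iter):
        grad = X.T @ (X @ b - y)                 # gradient of the smooth part
        b = soft_threshold(b - step * grad, step * lam)
    return b
```

For orthonormal columns the solution has the closed form soft_threshold(X.T @ y, lam), which makes a convenient correctness check.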


Forward stagewise regression and the monotone lasso

Abstract: We consider the least angle regression and forward stagewise algorithms for solving penalized least squares regression problems. In Efron, Hastie, Johnstone & Tibshirani (2004) it is proved that the least angle regression algorithm, with a small modification, solves the lasso regression problem. Here we give an analogous result for incremental forward stagewise regression, showing tha...
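The incremental forward stagewise algorithm referred to above is easy to state: repeatedly find the predictor most correlated with the current residual and nudge its coefficient by a tiny amount in the direction of that correlation. A minimal sketch (illustrative parameters, not the paper's exact formulation):

```python
import numpy as np

def forward_stagewise(X, y, eps=0.01, n_steps=2000):
    """Incremental forward stagewise regression: tiny coefficient
    updates toward the predictor most correlated with the residual."""
    n, p = X.shape
    b = np.zeros(p)
    r = y.astype(float).copy()
    for _ in range(n_steps):
        corr = X.T @ r                       # correlations with the residual
        j = np.argmax(np.abs(corr))          # most correlated predictor
        delta = eps * np.sign(corr[j])       # tiny step in that direction
        b[j] += delta
        r -= delta * X[:, j]                 # keep the residual in sync
    return b
```

As eps shrinks, the sequence of coefficient profiles traces out a monotone solution path, which is the connection to the lasso explored in the cited work.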




Journal:
  • Int. J. Systems Science

Volume 48, Issue

Pages -

Published 2017